On Accuracy Benchmarks, Metrics, and Test Methods

1 Preface

Abstract

This draft report was prepared by NIST staff at the request of the Technical Guidelines Development Committee (TGDC) to serve as a point of discussion at the Dec. 4-5 meeting of the TGDC. Prepared in conjunction with members of a TGDC subcommittee, the report is a discussion draft and does not represent a consensus view or recommendation from either NIST or the TGDC; it reflects the conclusions of NIST research staff for purposes of discussion. The TGDC is an advisory group to the Election Assistance Commission, which produces voluntary voting system guidelines and was established by the Help America Vote Act. NIST serves as a technical advisor to the TGDC. The NIST research and the draft report's conclusions are based on interviews and discussions with election officials, voting system vendors, computer scientists, and other experts in the field, as well as a literature search and the technical expertise of its authors. The report is intended to help in developing guidelines for the next generation of electronic voting machines, to ensure that these systems are as reliable, accurate, and secure as possible. Issues of certification or decertification of voting systems currently in place are outside the scope of this document and of the TGDC's deliberations.

This document identifies problems with the test method for accuracy specified in VVSG'05 and describes some possible solutions. These possible solutions are the result of a preliminary analysis and do not constitute a NIST position or consensus. Harmonization with ongoing work on the similar topic of reliability testing has not yet occurred. We are providing this preliminary material only to keep the committee apprised of our activities and to provide an opportunity for early feedback.

The informal concept of voting system accuracy is formalized using the ratio of the number of errors that occur to the volume of data processed, also known as the error rate. By keeping track of the number of errors and the volume of data over the course of a test campaign, one can trivially calculate the observed cumulative error rate. However, the observed error rate is not necessarily a good indication of the true error rate. The true error rate describes the expected performance of the system in the field, but it cannot be observed in a test campaign of finite duration, using a finite-sized sample. The system submitted for testing is assumed to be a representative sample (see [6] Ch. 8), so the …
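The distinction the abstract draws between the observed cumulative error rate and the unobservable true error rate can be made concrete with standard interval estimation. The following sketch is illustrative only and is not taken from the report: the function names are our own, it depends on SciPy, and it uses the conventional Clopper-Pearson bound, which assumes each unit of test volume is an independent, identically distributed trial.

    from scipy.stats import beta

    def observed_error_rate(errors: int, volume: int) -> float:
        """Observed cumulative error rate: errors divided by volume processed."""
        return errors / volume

    def true_rate_upper_bound(errors: int, volume: int, alpha: float = 0.05) -> float:
        """One-sided (1 - alpha) Clopper-Pearson upper confidence bound on the
        true error rate, treating each unit of volume as an independent trial.
        With zero observed errors this reduces to 1 - alpha**(1/volume)."""
        if errors >= volume:
            return 1.0
        return float(beta.ppf(1.0 - alpha, errors + 1, volume - errors))

    # Hypothetical example: 2 errors observed in 1,000,000 ballot positions.
    print(observed_error_rate(2, 1_000_000))    # 2e-06
    print(true_rate_upper_bound(2, 1_000_000))  # ~6.3e-06

The point of the example is the gap between the two numbers: a finite test can only bound the true error rate from above at a chosen confidence level, not observe it directly.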


Similar Resources

A Comparative Study of VHDL Implementation of FT-2D-cGA and FT-3D-cGA on Different Benchmarks (RESEARCH NOTE)

This paper presents a VHDL implementation of a fault-tolerant cellular genetic algorithm (cGA). The goal of the paper is to harden the hardware implementation of the cGA against single event upsets (SEUs) affecting the fitness registers in the target hardware. The proposed approach consists of two phases: error monitoring and error recovery. Using innovative connectivity between processing elements ...


Accuracy Metrics and Benchmarks for Simulations of Deformable Objects

There are a number of algorithms for interactively modeling deformable objects, such as soft human tissues. Because of the desire for real-time performance, approximations are made to calculate deformations quickly. This paper proposes accuracy metrics and benchmarks to allow a systematic comparison of simulation algorithms for deformable objects in 2D and 3D. We implement and compare the com...


A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design

The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accura...


Designing Benchmarks for P2P Systems

In this paper we discuss requirements for peer-to-peer (P2P) benchmarking, and we present two example approaches to benchmarks for distributed hash tables (DHTs) and P2P gaming overlays. We point out the characteristics of benchmarks for P2P systems, focusing on the challenges compared to conventional benchmarks. The two benchmarks for very different types of P2P systems are designed applying a...


OOSP: Ontological Benchmarks Made on the Fly

The demo paper presents OOSP (Online Ontology Set Picker), a tool that allows users to select, from major repositories, a set of ontologies that satisfies a user-defined set of metrics. Its main purpose is to allow designers of ontology tools to rapidly build custom benchmarks on which they can test different features. It could also serve for usage studies of different ontology language constructs and f...




Publication date: 2006